
Future of the Web - Group items tagged: tools, www, web

Paul Merrell

Last Call Working Draft -- W3C Authoring Tool Accessibility Guidelines (ATAG) 2.0 - 1 views

  • Examples of authoring tools: ATAG 2.0 applies to a wide variety of web content generating applications, including, but not limited to: web page authoring tools (e.g., WYSIWYG HTML editors); software for directly editing source code (see note below); software for converting to web content technologies (e.g., "Save as HTML" features in office suites); integrated development environments (e.g., for web application development); software that generates web content on the basis of templates, scripts, command-line input or "wizard"-type processes; software for rapidly updating portions of web pages (e.g., blogging, wikis, online forums); software for generating/managing entire web sites (e.g., content management systems, courseware tools, content aggregators); email clients that send messages in web content technologies; multimedia authoring tools; debugging tools for web content; and software for creating mobile web applications.
  • Web-based and non-web-based: ATAG 2.0 applies equally to authoring tools of web content that are web-based, non-web-based or a combination (e.g., a non-web-based markup editor with a web-based help system, a web-based content management system with a non-web-based file uploader client).
  • Real-time publishing: ATAG 2.0 applies to authoring tools with workflows that involve real-time publishing of web content (e.g., some collaborative tools). For these authoring tools, conformance to Part B of ATAG 2.0 may involve some combination of real-time accessibility supports and additional accessibility supports available after the real-time authoring session (e.g., the ability to add captions for audio that was initially published in real-time). For more information, see Implementing ATAG 2.0, Appendix E: Real-time content production.
  • Text editors: ATAG 2.0 is not intended to apply to simple text editors that can be used to edit source content, but that include no support for the production of any particular web content technology. In contrast, ATAG 2.0 can apply to more sophisticated source content editors that support the production of specific web content technologies (e.g., with syntax checking, markup prediction, etc.).
  •  
    The link is the latest-version link, so the page should update when this specification graduates to a W3C Recommendation.
Paul Merrell

We're Halfway to Encrypting the Entire Web | Electronic Frontier Foundation - 0 views

  • The movement to encrypt the web has reached a milestone. As of earlier this month, approximately half of Internet traffic is now protected by HTTPS. In other words, we are halfway to a web safer from the eavesdropping, content hijacking, cookie stealing, and censorship that HTTPS can protect against. Mozilla recently reported that the average volume of encrypted web traffic on Firefox now surpasses the average unencrypted volume.
  • Google Chrome’s figures on HTTPS usage are consistent with that finding, showing that over 50% of all pages loaded are protected by HTTPS across different operating systems.
  • This milestone is a combination of HTTPS implementation victories: from tech giants and large content providers, from small websites, and from users themselves.
  • Starting in 2010, EFF members have pushed tech companies to follow crypto best practices. We applauded when Facebook and Twitter implemented HTTPS by default, and when Wikipedia and several other popular sites later followed suit. Google has also put pressure on the tech community by using HTTPS as a signal in search ranking algorithms and, starting this year, showing security warnings in Chrome when users load HTTP sites that request passwords or credit card numbers. EFF’s Encrypt the Web Report also played a big role in tracking and encouraging specific practices. Recently other organizations have followed suit with more sophisticated tracking projects. For example, Secure the News and Pulse track HTTPS progress among news media sites and U.S. government sites, respectively.
  • But securing large, popular websites is only one part of a much bigger battle. Encrypting the entire web requires HTTPS implementation to be accessible to independent, smaller websites. Let’s Encrypt and Certbot have changed the game here, making what was once an expensive, technically demanding process into an easy and affordable task for webmasters across a range of resource and skill levels. Let’s Encrypt is a Certificate Authority (CA) run by the Internet Security Research Group (ISRG) and founded by EFF, Mozilla, and the University of Michigan, with Cisco and Akamai as founding sponsors. As a CA, Let’s Encrypt issues and maintains digital certificates that help web users and their browsers know they’re actually talking to the site they intended to. CAs are crucial to secure, HTTPS-encrypted communication, as these certificates verify the association between an HTTPS site and a cryptographic public key. Through EFF’s Certbot tool, webmasters can get a free certificate from Let’s Encrypt and automatically configure their server to use it. Since we announced that Let’s Encrypt was the web’s largest certificate authority last October, it has exploded from 12 million certs to over 28 million. Most of Let’s Encrypt’s growth has come from giving previously unencrypted sites their first-ever certificates. A large share of these leaps in HTTPS adoption are also thanks to major hosting companies and platforms--like WordPress.com, Squarespace, and dozens of others--integrating Let’s Encrypt and providing HTTPS to their users and customers.
  • Unfortunately, you can only use HTTPS on websites that support it--and about half of all web traffic is still with sites that don’t. However, when sites partially support HTTPS, users can step in with the HTTPS Everywhere browser extension. A collaboration between EFF and the Tor Project, HTTPS Everywhere makes your browser use HTTPS wherever possible. Some websites offer inconsistent support for HTTPS, use unencrypted HTTP as a default, or link from secure HTTPS pages to unencrypted HTTP pages. HTTPS Everywhere fixes these problems by rewriting requests to these sites to HTTPS, automatically activating encryption and HTTPS protection that might otherwise slip through the cracks.
  • Our goal is a universally encrypted web that makes a tool like HTTPS Everywhere redundant. Until then, we have more work to do. Protect your own browsing and websites with HTTPS Everywhere and Certbot, and spread the word to your friends, family, and colleagues to do the same. Together, we can encrypt the entire web.
  •  
    HTTPS connections don't work for you if you don't use them. If you're not using HTTPS Everywhere in your browser, you should be; it's your privacy that is at stake. And every encrypted communication you make adds to the backlog of encrypted data that the NSA and other internet voyeurs must process as encrypted traffic; because cracking encrypted messages is computationally expensive, the voyeurs do not have the resources to crack more than a tiny fraction. HTTPS Everywhere is a free extension for Firefox, Chrome, and Opera. You can get it here: https://www.eff.org/HTTPS-everywhere
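The certificate authority role described above (vouching that a given public key really belongs to a given site) can be seen from the client side with nothing but the Python standard library. The sketch below is illustrative only and is not part of Certbot or HTTPS Everywhere; the hostname is just an example, and any HTTPS site would do.

```python
# Minimal sketch: open a TLS connection the way a browser would and inspect the
# certificate the server presents. Illustrative only; not EFF tooling.
import socket
import ssl

def fetch_certificate(hostname: str, port: int = 443) -> dict:
    """Return the peer certificate's metadata for an HTTPS host."""
    context = ssl.create_default_context()  # validates against the system CA store
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

if __name__ == "__main__":
    cert = fetch_certificate("www.eff.org")  # example hostname
    print("Issued by:", dict(pair[0] for pair in cert["issuer"]))
    print("Issued to:", dict(pair[0] for pair in cert["subject"]))
    print("Expires:  ", cert["notAfter"])
```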
Gonzalo San Gil, PhD.

Invisible Web: What it is, Why it exists, How to find it, and Its inherent ambiguity - 1 views

  •  
    [What is the "Invisible Web", a.k.a. the "Deep Web"? The "visible web" is what you can find using general web search engines. It's also what you see in almost all subject directories. The "invisible web" is what you cannot find using these types of tools. The first version of this web page was written in 2000, when this topic was new and baffling to many web searchers. Since then, search engines' crawlers and indexing programs have overcome many of the technical barriers that made it impossible for them to find "invisible" web pages. These types of pages used to be invisible but can now be found in most search engine results: Pages in non-HTML formats (pdf, Word, Excel, PowerPoint), now converted into HTML. Script-based pages, whose URLs contain a ? or other script coding. Pages generated dynamically by other types of database software (e.g., Active Server Pages, Cold Fusion). These can be indexed if there is a stable URL somewhere that search engine crawlers can find. ]
Gary Edwards

Out in the Open: Hackers Build a Skype That's Not Controlled by Microsoft | Enterprise ... - 0 views

shared by Gary Edwards on 04 Sep 14
  • The main thing the Tox team is trying to do, besides provide encryption, is create a tool that requires no central servers whatsoever—not even ones that you would host yourself. It relies on the same technology that BitTorrent uses to provide direct connections between users, so there’s no central hub to snoop on or take down.
  • Tox is trying to roll both peer-to-peer and voice calling into one.
  • Actually, it’s going a bit further than that: Tox is just a protocol for encrypted peer-to-peer data transmission.
  • “Tox is just a tunnel to another node that’s encrypted and secure,” says David Lohle, a spokesperson for the project. “What you want to send over that pipe is up to your imagination.”
  • For example, one developer is building an e-mail replacement with the protocol, and Lohle says someone else is building an open source alternative to BitTorrent Sync.
  •  
    "The web forum 4chan is known mostly as a place to share juvenile and, to put it mildly, politically incorrect images. But it's also the birthplace of one of the latest attempts to subvert the NSA's mass surveillance program. When whistleblower Edward Snowden revealed that full extent of the NSA's activities last year, members of the site's tech forum started talking about the need for a more secure alternative to Skype. Soon, they'd opened a chat room to discuss the project and created an account on the code hosting and collaboration site GitHub and began uploading code. Eventually, they settled on the name Tox, and you can already download prototypes of the surprisingly easy-to-use tool. The tool is part of a widespread effort to create secure online communication tools that are controlled not only by any one company, but by the world at large-a continued reaction to the Snowden revelations. This includes everything from instant messaging tools to email services. It's too early to count on Tox to protect you from eavesdroppers and spies. Like so many other new tools, it's still in the early stages of development and has yet to receive the scrutiny that other security tools, such as the instant messaging encryption plugin Off The Record has. But it endeavors to carve a unique niche within the secure communications ecosystem."
Gary Edwards

InformationWeek 500 Trends: Web 2.0, Globalization, Virtualization, And More -- Informa... - 0 views

  • Web 2.0 is one of the trendiest ideas in tech, for instance, but there are entire industries where not one company in our survey cites it as a top productivity improver. Meantime, adoption of some more tactical technologies, such as WAN optimization, has exploded in the last year.
  • critical trends, from Web 2.0 to globalization to virtualization
  • the momentum is behind wikis, blogs, and social networking, though primarily among co-workers.
  • When it comes to using Web 2.0 collaboration tools
  • Use of hosted collaboration applications--from calendars to document sharing--hit a reasonably high 60%.
  • Asked what technologies have improved productivity the most, only 14% overall cite "encouraging the use of Web 2.0 technologies."
  • One possible bright spot in our survey is that implementing new collaboration tools, such as Microsoft SharePoint, is cited more often than any other--48%--as a technology leveraged to improve productivity.
  • at satisfied companies, business units rather than IT departments are much more likely to drive the selection of Web 2.0 technologies. At companies dissatisfied with Web 2.0, IT is more likely to take the lead.
  • There's more to Web 2.0 than collaboration tools like wikis and other employee-facing tools, and there's interesting progress on the critical back-end layer that enables Web 2.0. One is mashups; 38% of InformationWeek 500 companies are combining Web and enterprise content in new ways. The other is in Web 2.0 development tools.
  •  
    What the InformationWeek 500 data tells us about the use of emerging technologies.
Paul Merrell

Web video accessibility from EmbedPlus on 2011-08-11 (w3c-wai-ig@w3.org from July to Se... - 0 views

  •  
    For those who care about Web accessibility, here is an opportunity to provide feedback on some accessibility tools for one of the most widely used web services. The message deserves wide distribution. The contact email address is on the linked page. The linked tool set should also be of interest to those doing mashups or embedding YouTube videos in web pages. Hi all, I'm the co-developer of a YouTube third-party tool called EmbedPlus. It enhances the standard YouTube player with many features that aren't inherently supported. We've been getting lots of feedback regarding the accessibility benefits of some of these features, like movable zoom, slow motion, and even third-party annotations. As the tool continues to grow in popularity, the importance of its accessibility rises. I decided to do some research and found the WAI Interest Group to be a major proponent of accessibility on the web. If anyone has time to take a look at EmbedPlus and share feedback that could help improve the tool, please do. Here's the link: http://www.embedplus.com/ Thank you in advance, Tay
Gary Edwards

The NeuroCommons Project: Open RDF Ontologies for Scientific Research - 0 views

  •  
    The NeuroCommons project seeks to make all scientific research materials - research articles, annotations, data, physical materials - as available and as useable as they can be. This is done by fostering practices that render information in a form that promotes uniform access by computational agents - sometimes called "interoperability". Semantic Web practices based on RDF will enable knowledge sources to combine meaningfully and will support semantically precise queries that span multiple information sources.

    Working with the Creative Commons group that sponsors "Neurocommons", Microsoft has developed and released an open source "ontology" add-on for Microsoft Word. The add-on makes use of the MSOffice XML panel, Open XML formats, and proprietary "Smart Tags". Microsoft is also making the source code for both the Ontology Add-in for Office Word 2007 and the Creative Commons Add-in for Office Word 2007 available under the Open Source Initiative (OSI)-approved Microsoft Public License (Ms-PL) at http://ucsdbiolit.codeplex.com and http://ccaddin2007.codeplex.com, respectively.

    No doubt it will take some digging to figure out what is going on here. Microsoft WPF technologies include Smart Tags and LINQ. The Creative Commons "Neurocommons" ontology work is based on W3C RDF and SPARQL. How these opposing technologies interoperate with legacy MSOffice 2003 and 2007 desktops is an interesting question, one that may hold the answer to the larger problem of re-purposing MSOffice for the Open Web.

    We know Microsoft is re-purposing MSOffice for the MS Web. Perhaps this work with Creative Commons will help to open up the Microsoft desktop productivity environment to the Open Web? One can always hope :)

    Dr. Dobb's has the Microsoft - Creative Commons announcement: "Microsoft Releases Open Tools for Scientific Research ... Joins Creative Commons in Releasing the Ontology Add-in."
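Since the comment above hinges on RDF and SPARQL as the glue for interoperable research data, here is a minimal sketch of that pattern using the rdflib library. The namespace, triples, and query are invented placeholders, not the actual NeuroCommons graph.

```python
# Minimal RDF + SPARQL sketch with rdflib; the data and namespace are made up.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/neuro/")  # placeholder namespace

g = Graph()
article = EX["article/42"]
g.add((article, RDF.type, EX.ResearchArticle))
g.add((article, RDFS.label, Literal("Dopamine signalling in the striatum")))
g.add((article, EX.mentionsGene, EX["gene/DRD2"]))

# A semantically precise query; the same SPARQL could span several merged graphs.
results = g.query("""
    PREFIX ex: <http://example.org/neuro/>
    SELECT ?article ?gene WHERE {
        ?article a ex:ResearchArticle ;
                 ex:mentionsGene ?gene .
    }
""")
for row in results:
    print(row.article, row.gene)
```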
Gonzalo San Gil, PhD.

How to Access Linux Server Terminal in Web Browser Using 'Wetty (Web + tty)' Tool - 0 views

  •  
    "Wouldn't it be fantastic if there was a way to access a remote Linux server directly from the web browser? Luckily for us all, there is a tool called Wetty (Web + tty) that allows us to do just that - without the need to switch programs and all from the same web browser window."
Paul Merrell

Exclusive: Tim Berners-Lee tells us his radical new plan to upend the - 1 views

  • “The intent is world domination,” Berners-Lee says with a wry smile. The British-born scientist is known for his dry sense of humor. But in this case, he is not joking. This week, Berners-Lee will launch Inrupt, a startup that he has been building, in stealth mode, for the past nine months. Backed by Glasswing Ventures, its mission is to turbocharge a broader movement afoot, among developers around the world, to decentralize the web and take back power from the forces that have profited from centralizing it. In other words, it’s game on for Facebook, Google, Amazon. For years now, Berners-Lee and other internet activists have been dreaming of a digital utopia where individuals control their own data and the internet remains free and open. But for Berners-Lee, the time for dreaming is over.
  • In a post published this weekend, Berners-Lee explains that he is taking a sabbatical from MIT to work full time on Inrupt. The company will be the first major commercial venture built off of Solid, a decentralized web platform he and others at MIT have spent years building.
  • If all goes as planned, Inrupt will be to Solid what Netscape once was for many first-time users of the web: an easy way in. And like with Netscape, Berners-Lee hopes Inrupt will be just the first of many companies to emerge from Solid.
  • On his screen, there is a simple-looking web page with tabs across the top: Tim’s to-do list, his calendar, chats, address book. He built this app–one of the first on Solid–for his personal use. It is simple, spare. In fact, it’s so plain that, at first glance, it’s hard to see its significance. But to Berners-Lee, this is where the revolution begins. The app, using Solid’s decentralized technology, allows Berners-Lee to access all of his data seamlessly–his calendar, his music library, videos, chat, research. It’s like a mashup of Google Drive, Microsoft Outlook, Slack, Spotify, and WhatsApp. The difference here is that, on Solid, all the information is under his control. Every bit of data he creates or adds on Solid exists within a Solid pod–which is an acronym for personal online data store. These pods are what give Solid users control over their applications and information on the web. Anyone using the platform will get a Solid identity and Solid pod. This is how people, Berners-Lee says, will take back the power of the web from corporations.
  • For example, one idea Berners-Lee is currently working on is a way to create a decentralized version of Alexa, Amazon’s increasingly ubiquitous digital assistant. He calls it Charlie. Unlike with Alexa, on Charlie people would own all their data. That means they could trust Charlie with, for example, health records, children’s school events, or financial records. That is the kind of machine Berners-Lee hopes will spring up all over Solid to flip the power dynamics of the web from corporation to individuals.
  • Berners-Lee believes Solid will resonate with the global community of developers, hackers, and internet activists who bristle over corporate and government control of the web. “Developers have always had a certain amount of revolutionary spirit,” he observes. Circumventing government spies or corporate overlords may be the initial lure of Solid, but the bigger draw will be something even more appealing to hackers: freedom. In the centralized web, data is kept in silos–controlled by the companies that build them, like Facebook and Google. In the decentralized web, there are no silos. Starting this week, developers around the world will be able to start building their own decentralized apps with tools through the Inrupt site. Berners-Lee will spend this fall crisscrossing the globe, giving tutorials and presentations to developers about Solid and Inrupt.
  • When asked about this, Berners-Lee says flatly: “We are not talking to Facebook and Google about whether or not to introduce a complete change where all their business models are completely upended overnight. We are not asking their permission.” Game on.
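A Solid pod, as described above, is essentially a personal HTTP server holding RDF resources, so reading and writing data reduces to ordinary GET and PUT requests. The sketch below uses a hypothetical pod URL and omits the Solid-OIDC authentication a real write would require; it shows only the shape of the interaction, not Inrupt's client libraries.

```python
# Rough sketch of pod access over plain HTTP. The URL is hypothetical, and a real
# pod would reject the PUT without an authenticated Solid-OIDC session.
import requests

POD_RESOURCE = "https://alice.example.org/public/todo.ttl"  # hypothetical pod resource

# Read a public resource (Solid resources are commonly served as Turtle).
resp = requests.get(POD_RESOURCE, headers={"Accept": "text/turtle"})
resp.raise_for_status()
print(resp.text)

# Write it back; ownership of the pod, not a platform, decides whether this succeeds.
new_body = '@prefix ex: <http://example.org/> .\n<> ex:task "Write a Solid demo" .\n'
resp = requests.put(
    POD_RESOURCE,
    data=new_body.encode("utf-8"),
    headers={"Content-Type": "text/turtle"},
)
print(resp.status_code)
```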
Gary Edwards

RDFa, Drupal and a Practical Semantic Web - 1 views

  •  
    CMSWire has a brief explanation of RDFa and why it's important. RDFa is also finding its way into the Drupal CMS, which could be a game changer. Tim Berners-Lee's vision of a "Semantic Web", where the meaning of content is understood by both humans and machines, depends on the emergence of capable information systems that make it transparently easy to add semantic markup. I'm not surprised that Drupal is jumping in with both feet.

    "... In the march toward creating the semantic web, web content management systems such as Drupal (news, site) and many proprietary vendors struggle with the goal of emitting structured information that other sites and tools can usefully consume. There's a balance to be struck between human and machine utility, not to mention simplicity of instrumentation.

    With RDFa (see the W3C proposal), software and web developers have the specification they need to know how to structure data in order to lend meaning both to machines and to humans, all in a single file. And from what we've seen recently, the Drupal community is making the best of it."
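For readers who have not seen it, RDFa is just a handful of HTML attributes. The fragment below is an assumed example (RDFa 1.1 with the FOAF vocabulary), not Drupal's actual output: the property and typeof attributes let a machine extract triples from the same markup a person reads as ordinary text.

```html
<!-- Assumed RDFa example, not generated by Drupal. A parser extracts triples such as
     <#me> a foaf:Person ; foaf:name "Jane Doe" ; foaf:homepage <http://example.org/~jane> . -->
<div prefix="foaf: http://xmlns.com/foaf/0.1/" about="#me" typeof="foaf:Person">
  <span property="foaf:name">Jane Doe</span>
  <a property="foaf:homepage" href="http://example.org/~jane">Jane's homepage</a>
</div>
```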
Gonzalo San Gil, PhD.

No, Department of Justice, 80 Percent of Tor Traffic Is Not Child Porn | WIRED [# ! Via... - 0 views

  • The debate over online anonymity, and all the whistleblowers, trolls, anarchists, journalists and political dissidents it enables, is messy enough. It doesn’t need the US government making up bogus statistics about how much that anonymity facilitates child pornography.
  • At the State of the Net conference in Washington on Tuesday, US assistant attorney general Leslie Caldwell discussed what she described as the dangers of encryption and cryptographic anonymity tools like Tor, and how those tools can hamper law enforcement. Her statements are the latest in a growing drumbeat of federal criticism of tech companies and software projects that provide privacy and anonymity at the expense of surveillance. And as an example of the grave risks presented by that privacy, she cited a study she said claimed an overwhelming majority of Tor’s anonymous traffic relates to pedophilia. “Tor obviously was created with good intentions, but it’s a huge problem for law enforcement,” Caldwell said in comments reported by Motherboard and confirmed to me by others who attended the conference. “We understand 80 percent of traffic on the Tor network involves child pornography.” That statistic is horrifying. It’s also baloney.
  • In a series of tweets that followed Caldwell’s statement, a Department of Justice flack said Caldwell was citing a University of Portsmouth study WIRED covered in December. He included a link to our story. But I made clear at the time that the study claimed 80 percent of traffic to Tor hidden services related to child pornography, not 80 percent of all Tor traffic. That is a huge, and important, distinction. The vast majority of Tor’s users run the free anonymity software while visiting conventional websites, using it to route their traffic through encrypted hops around the globe to avoid censorship and surveillance. But Tor also allows websites to run Tor, something known as a Tor hidden service. This collection of hidden sites, which comprise what’s often referred to as the “dark web,” use Tor to obscure the physical location of the servers that run them. Visits to those dark web sites account for only 1.5 percent of all Tor traffic, according to the software’s creators at the non-profit Tor Project. The University of Portsmouth study dealt exclusively with visits to hidden services. In contrast to Caldwell’s 80 percent claim, the Tor Project’s director Roger Dingledine pointed out last month that the study’s pedophilia findings refer to something closer to a single percent of Tor’s overall traffic.
  • So to whoever at the Department of Justice is preparing these talking points for public consumption: Thanks for citing my story. Next time, please try reading it.
  •  
    [# Via Paul Merrell's Diigo...] "That is a huge, and important, distinction. The vast majority of Tor's users run the free anonymity software while visiting conventional websites, using it to route their traffic through encrypted hops around the globe to avoid censorship and surveillance. But Tor also allows websites to run Tor, something known as a Tor hidden service. This collection of hidden sites, which comprise what's often referred to as the "dark web," use Tor to obscure the physical location of the servers that run them. Visits to those dark web sites account for only 1.5 percent of all Tor traffic, according to the software's creators at the non-profit Tor Project."
Gary Edwards

Introduction to OpenCalais | OpenCalais - 0 views

  •  
    "The free OpenCalais service and open API is the fastest way to tag the people, places, facts and events in your content.  It can help you improve your SEO, increase your reader engagement, create search-engine-friendly 'topic hubs' and streamline content operations - saving you time and money. OpenCalais is free to use in both commercial and non-commercial settings, but can only be used on public content (don't run your confidential or competitive company information through it!). OpenCalais does not keep a copy of your content, but it does keep a copy of the metadata it extracts there from. To repeat, OpenCalais is not a private service, and there is no secure, enterprise version that you can buy to operate behind a firewall. It is your responsibility to police the content that you submit, so make sure you are comfortable with our Terms of Service (TOS) before you jump in. You can process up to 50,000 documents per day (blog posts, news stories, Web pages, etc.) free of charge.  If you need to process more than that - say you are an aggregator or a media monitoring service - then see this page to learn about Calais Professional. We offer a very affordable license. OpenCalais' early adopters include CBS Interactive / CNET, Huffington Post, Slate, Al Jazeera, The New Republic, The White House and more. Already more than 30,000 developers have signed up, and more than 50 publishers and 75 entrepreneurs are using the free service to help build their businesses. You can read about the pioneering work of these publishers, entrepreneurs and developers here. To get started, scroll to the bottom section of this page. To build OpenCalais into an existing site or publishing platform (CMS), you will need to work with your developers.  Why OpenCalais Matters The reason OpenCalais - and so-called "Web 3.0" in general (concepts like the Semantic Web, Linked Data, etc.) - are important is that these technologies make it easy to automatically conne
Gary Edwards

Sun pitches new cloud as 'Open Platform' * - 0 views

  •  
    Sun takes on the problem of interoperability and portability of applications in a world where there will be many, many clouds. At the rollout of the Sun Cloud, key executives explain Sun's implementation of open cloud APIs and what they see as a pressing need for management tools that will allow some standardization across clouds.

    Sun's Open Cloud API plan is a clean reuse of existing Open Web APIs.

    "..... The underpinning of the Open Cloud Platform that Sun will be pitching to developers is a set of cloud APIs, the creation of which is focused under Project Kenai and which has been released under a Community Commons open source license. Sun wants lots of feedback on the APIs and wants these APIs to become a standard too, hence the open license. These APIs describes how virtual elements in a cloud are created, started, stopped, and hibernated using HTTP commands such as GET, PUT, and POST...."

    "...... The upshot is that these APIs will allow programmatic access to virtual infrastructure from Java, PHP, Python, and Ruby and that means system admins can script how virtual resources are deployed. The APIs, as co-creator Tim Bray explains in his blog, are written in JavaScript Object Notation (JSON), not XML. The Q-Layer software is a graphical representation of what is going on down in the APIs, and you can moving virtual resources into the cloud with a click of a mouse using the dashboard or programmatically using the APIs from those four programming languages listed above. (PHP support is not yet available, but will be)....."
  •  
    I can see why Sun picked those four languages first. Can I assume that with a bit of work, this API will be usable from any language with a C "foreign function interface", such as Perl, Common Lisp, Bourne shell, Squeak Smalltalk, and others that your server application might be written in?
  •  
    I read this comment, which largely answers my question, at: http://www.tbray.org/ongoing/When/200x/2009/03/16/Sun-Cloud "So right now JSON out of a shell tool is not so good. More things like this will create pressure for development of tools to change that, but years of widespread XML/HTML deployment have only produced a few oddly maintained tools. Perhaps that's because you can scrape quite a bit of the web with a couple sed passes, and if I were to have to deal with the mentioned tools, that's probably the route I'd take." (seth w. klein) In other words, anything that can talk text over HTTP can do this with a bit of work, but an object-oriented language is likely to be more at home with JSON (JavaScript Object Notation).
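The API style described in the excerpts above, driving virtual resources with HTTP verbs and JSON bodies, looks roughly like the sketch below. The base URL, paths, and JSON fields are invented for illustration; the real Sun Cloud API defined its own resource model, and any HTTP-capable language (or even a shell tool plus a JSON parser) could make the same calls.

```python
# Sketch of JSON-over-HTTP control of virtual resources; endpoints and fields
# are invented placeholders, not the actual Sun Cloud API.
import json
import urllib.request

BASE = "https://cloud.example.com/api"  # placeholder base URL

def api_call(method, path, payload=None):
    """Send one HTTP request with an optional JSON body and return parsed JSON."""
    data = json.dumps(payload).encode("utf-8") if payload is not None else None
    req = urllib.request.Request(
        BASE + path,
        data=data,
        method=method,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read() or b"{}")

# Create a VM, start it, then hibernate it: plain HTTP verbs do all the work.
vm = api_call("POST", "/vms", {"name": "web-1", "ram_mb": 2048})
api_call("PUT", "/vms/%s/state" % vm["id"], {"state": "started"})
api_call("PUT", "/vms/%s/state" % vm["id"], {"state": "hibernated"})
```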
Gonzalo San Gil, PhD.

10 Search Engines to Explore the Invisible Web - 6 views

  •  
    [The Invisible Web refers to the part of the WWW that's not indexed by the search engines. Most of us think that search powerhouses like Google and Bing are like the Great Oracle... they see everything. Unfortunately, they can't, because they aren't divine at all; they are just web spiders that index pages by following one hyperlink after another.]
Gonzalo San Gil, PhD.

Etherpad - A Real Time Web Based Online Collaborative Document Editor for Linux - 0 views

  •  
    "Etherpad is a web based free document editor tool which allows a group of users to work jointly on a document in a real time, like a multi player editor which runs on a web browser. Etherpad authors can edit and at the same time see each others edits in real time with a capability to display author's text in their own colours."
Rana Adeel

Top Benefits and Uses of Google Analytics - 0 views

  •  
    If you are an internet developer, you are probably already aware of Google Analytics; if you are a newcomer to this market, it is a tool you should know about. Google Analytics is a free tool that can be used to track how visitors interact with your web site. It also tracks the performance of your keywords so you can improve your SEO rankings.
Paul Merrell

Information Warfare: Automated Propaganda and Social Media Bots | Global Research - 0 views

  • NATO has announced that it is launching an “information war” against Russia. The UK publicly announced a battalion of keyboard warriors to spread disinformation. It’s well-documented that the West has long used false propaganda to sway public opinion. Western military and intelligence services manipulate social media to counter criticism of Western policies. Such manipulation includes flooding social media with comments supporting the government and large corporations, using armies of sock puppets, i.e. fake social media identities. See this, this, this, this and this. In 2013, the American Congress repealed the formal ban against the deployment of propaganda against U.S. citizens living on American soil. So there’s even less to constrain propaganda than before.
  • Information warfare for propaganda purposes also includes: the Pentagon, Federal Reserve and other government entities using software to track discussion of political issues … to try to nip dissent in the bud before it goes viral; “controlling, infiltrating, manipulating and warping” online discourse; and the use of artificial intelligence programs to try to predict how people will react to propaganda.
  • Some of the propaganda is spread by software programs. We pointed out 6 years ago that people were writing scripts to censor hard-hitting information from social media. One of America’s top cyber-propagandists – former high-level military information officer Joel Harding – wrote in December: I was in a discussion today about information being used in social media as a possible weapon.  The people I was talking with have a tool which scrapes social media sites, gauges their sentiment and gives the user the opportunity to automatically generate a persuasive response. Their tool is called a “Social Networking Influence Engine”. *** The implications seem to be profound for the information environment. *** The people who own this tool are in the civilian world and don’t even remotely touch the defense sector, so getting approval from the US Department of State might not even occur to them.
  • How Can This Be Real? Gizmodo reported in 2010: Software developer Nigel Leck got tired of rehashing the same 140-character arguments against climate change deniers, so he programmed a bot that does the work for him. With citations! Leck’s bot, @AI_AGW, doesn’t just respond to arguments directed at Leck himself, it goes out and picks fights. Every five minutes it trawls Twitter for terms and phrases that commonly crop up in Tweets that refute human-caused climate change. It then searches its database of hundreds to find a counter-argument best suited for that tweet—usually a quick statement and a link to a scientific source. As can be the case with these sorts of things, many of the deniers don’t know they’ve been targeted by a robot and engage AI_AGW in debate. The bot will continue to fire back canned responses that best fit the interlocutor’s line of debate—Leck says this goes on for days, in some cases—and the bot’s been outfitted with a number of responses on the topic of religion, where the arguments unsurprisingly often end up. Technology has come a long way in the past 5 years. So if a lone programmer could do this 5 years ago, imagine what he could do now. And the big players have a lot more resources at their disposal than a lone climate activist/software developer does. For example, a government expert told the Washington Post that the government “quite literally can watch your ideas form as you type” (and see this). So if the lone programmer is doing it, it’s not unreasonable to assume that the big boys are widely doing it.
  • How Effective Are Automated Comments? Unfortunately, this is more effective than you might assume … Specifically, scientists have shown that name-calling and swearing breaks down people’s ability to think rationally … and intentionally sowing discord and posting junk comments to push down insightful comments  are common propaganda techniques. Indeed, an automated program need not even be that sophisticated … it can copy a couple of words from the main post or a comment, and then spew back one or more radioactive labels such as “terrorist”, “commie”, “Russia-lover”, “wimp”, “fascist”, “loser”, “traitor”, “conspiratard”, etc. Given that Harding and his compadres consider anyone who questions any U.S. policies as an enemy of the state  – as does the Obama administration (and see this) – many honest, patriotic writers and commenters may be targeted for automated propaganda comments.
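Stripped of any real social-media API, the bot described above comes down to a simple matching loop: scan incoming text for trigger phrases and reply with a canned, pre-sourced rebuttal. The toy sketch below shows only that matching logic; the trigger phrases, responses, and link placeholder are invented.

```python
# Toy reconstruction of the canned-response pattern described above. No posting
# or search API is included; triggers and replies are invented placeholders.
import random

CANNED_RESPONSES = {
    "climate is always changing": [
        "Natural variability does not explain the current warming trend: <source link>",
    ],
    "no warming since": [
        "Short windows cherry-pick the data; the long-term trend still rises: <source link>",
    ],
}

def pick_response(post_text):
    """Return a canned reply if the post contains a known trigger phrase, else None."""
    lowered = post_text.lower()
    for trigger, replies in CANNED_RESPONSES.items():
        if trigger in lowered:
            return random.choice(replies)
    return None

print(pick_response("Come on, the climate is always changing!"))
```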
Paul Merrell

From Radio to Porn, British Spies Track Web Users' Online Identities - 1 views

  • THERE WAS A SIMPLE AIM at the heart of the top-secret program: Record the website browsing habits of “every visible user on the Internet.” Before long, billions of digital records about ordinary people’s online activities were being stored every day. Among them were details cataloging visits to porn, social media and news websites, search engines, chat forums, and blogs. The mass surveillance operation — code-named KARMA POLICE — was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ. The revelations about the scope of the British agency’s surveillance are contained in documents obtained by The Intercept from National Security Agency whistleblower Edward Snowden. Previous reports based on the leaked files have exposed how GCHQ taps into Internet cables to monitor communications on a vast scale, but many details about what happens to the data after it has been vacuumed up have remained unclear.
  • Amid a renewed push from the U.K. government for more surveillance powers, more than two dozen documents being disclosed today by The Intercept reveal for the first time several major strands of GCHQ’s existing electronic eavesdropping capabilities.
  • The surveillance is underpinned by an opaque legal regime that has authorized GCHQ to sift through huge archives of metadata about the private phone calls, emails and Internet browsing logs of Brits, Americans, and any other citizens — all without a court order or judicial warrant
  • A huge volume of the Internet data GCHQ collects flows directly into a massive repository named Black Hole, which is at the core of the agency’s online spying operations, storing raw logs of intercepted material before it has been subject to analysis. Black Hole contains data collected by GCHQ as part of bulk “unselected” surveillance, meaning it is not focused on particular “selected” targets and instead includes troves of data indiscriminately swept up about ordinary people’s online activities. Between August 2007 and March 2009, GCHQ documents say that Black Hole was used to store more than 1.1 trillion “events” — a term the agency uses to refer to metadata records — with about 10 billion new entries added every day. As of March 2009, the largest slice of data Black Hole held — 41 percent — was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.
  • Throughout this period, as smartphone sales started to boom, the frequency of people’s Internet use was steadily increasing. In tandem, British spies were working frantically to bolster their spying capabilities, with plans afoot to expand the size of Black Hole and other repositories to handle an avalanche of new data. By 2010, according to the documents, GCHQ was logging 30 billion metadata records per day. By 2012, collection had increased to 50 billion per day, and work was underway to double capacity to 100 billion. The agency was developing “unprecedented” techniques to perform what it called “population-scale” data mining, monitoring all communications across entire countries in an effort to detect patterns or behaviors deemed suspicious. It was creating what it said would be, by 2013, “the world’s biggest” surveillance engine “to run cyber operations and to access better, more valued data for customers to make a real world difference.”
  • A document from the GCHQ target analysis center (GTAC) shows the Black Hole repository’s structure.
  • The data is searched by GCHQ analysts in a hunt for behavior online that could be connected to terrorism or other criminal activity. But it has also served a broader and more controversial purpose — helping the agency hack into European companies’ computer networks. In the lead up to its secret mission targeting Netherlands-based Gemalto, the largest SIM card manufacturer in the world, GCHQ used MUTANT BROTH in an effort to identify the company’s employees so it could hack into their computers. The system helped the agency analyze intercepted Facebook cookies it believed were associated with Gemalto staff located at offices in France and Poland. GCHQ later successfully infiltrated Gemalto’s internal networks, stealing encryption keys produced by the company that protect the privacy of cell phone communications.
  • Similarly, MUTANT BROTH proved integral to GCHQ’s hack of Belgian telecommunications provider Belgacom. The agency entered IP addresses associated with Belgacom into MUTANT BROTH to uncover information about the company’s employees. Cookies associated with the IPs revealed the Google, Yahoo, and LinkedIn accounts of three Belgacom engineers, whose computers were then targeted by the agency and infected with malware. The hacking operation resulted in GCHQ gaining deep access into the most sensitive parts of Belgacom’s internal systems, granting British spies the ability to intercept communications passing through the company’s networks.
  • In March, a U.K. parliamentary committee published the findings of an 18-month review of GCHQ’s operations and called for an overhaul of the laws that regulate the spying. The committee raised concerns about the agency gathering what it described as “bulk personal datasets” being held about “a wide range of people.” However, it censored the section of the report describing what these “datasets” contained, despite acknowledging that they “may be highly intrusive.” The Snowden documents shine light on some of the core GCHQ bulk data-gathering programs that the committee was likely referring to — pulling back the veil of secrecy that has shielded some of the agency’s most controversial surveillance operations from public scrutiny. KARMA POLICE and MUTANT BROTH are among the key bulk collection systems. But they do not operate in isolation — and the scope of GCHQ’s spying extends far beyond them.
  • The agency operates a bewildering array of other eavesdropping systems, each serving its own specific purpose and designated a unique code name, such as: SOCIAL ANTHROPOID, which is used to analyze metadata on emails, instant messenger chats, social media connections and conversations, plus “telephony” metadata about phone calls, cell phone locations, text and multimedia messages; MEMORY HOLE, which logs queries entered into search engines and associates each search with an IP address; MARBLED GECKO, which sifts through details about searches people have entered into Google Maps and Google Earth; and INFINITE MONKEYS, which analyzes data about the usage of online bulletin boards and forums. GCHQ has other programs that it uses to analyze the content of intercepted communications, such as the full written body of emails and the audio of phone calls. One of the most important content collection capabilities is TEMPORA, which mines vast amounts of emails, instant messages, voice calls and other communications and makes them accessible through a Google-style search tool named XKEYSCORE.
  • As of September 2012, TEMPORA was collecting “more than 40 billion pieces of content a day” and it was being used to spy on people across Europe, the Middle East, and North Africa, according to a top-secret memo outlining the scope of the program. The existence of TEMPORA was first revealed by The Guardian in June 2013. To analyze all of the communications it intercepts and to build a profile of the individuals it is monitoring, GCHQ uses a variety of different tools that can pull together all of the relevant information and make it accessible through a single interface. SAMUEL PEPYS is one such tool, built by the British spies to analyze both the content and metadata of emails, browsing sessions, and instant messages as they are being intercepted in real time. One screenshot of SAMUEL PEPYS in action shows the agency using it to monitor an individual in Sweden who visited a page about GCHQ on the U.S.-based anti-secrecy website Cryptome.
  • Partly due to the U.K.’s geographic location — situated between the United States and the western edge of continental Europe — a large amount of the world’s Internet traffic passes through its territory across international data cables. In 2010, GCHQ noted that what amounted to “25 percent of all Internet traffic” was transiting the U.K. through some 1,600 different cables. The agency said that it could “survey the majority of the 1,600” and “select the most valuable to switch into our processing systems.”
  • According to Joss Wright, a research fellow at the University of Oxford’s Internet Institute, tapping into the cables allows GCHQ to monitor a large portion of foreign communications. But the cables also transport masses of wholly domestic British emails and online chats, because when anyone in the U.K. sends an email or visits a website, their computer will routinely send and receive data from servers that are located overseas. “I could send a message from my computer here [in England] to my wife’s computer in the next room and on its way it could go through the U.S., France, and other countries,” Wright says. “That’s just the way the Internet is designed.” In other words, Wright adds, that means “a lot” of British data and communications transit across international cables daily, and are liable to be swept into GCHQ’s databases.
  • A map from a classified GCHQ presentation about intercepting communications from undersea cables. GCHQ is authorized to conduct dragnet surveillance of the international data cables through so-called external warrants that are signed off by a government minister. The external warrants permit the agency to monitor communications in foreign countries as well as British citizens’ international calls and emails — for example, a call from Islamabad to London. They prohibit GCHQ from reading or listening to the content of “internal” U.K. to U.K. emails and phone calls, which are supposed to be filtered out from GCHQ’s systems if they are inadvertently intercepted unless additional authorization is granted to scrutinize them. However, the same rules do not apply to metadata. A little-known loophole in the law allows GCHQ to use external warrants to collect and analyze bulk metadata about the emails, phone calls, and Internet browsing activities of British people, citizens of closely allied countries, and others, regardless of whether the data is derived from domestic U.K. to U.K. communications and browsing sessions or otherwise. In March, the existence of this loophole was quietly acknowledged by the U.K. parliamentary committee’s surveillance review, which stated in a section of its report that “special protection and additional safeguards” did not apply to metadata swept up using external warrants and that domestic British metadata could therefore be lawfully “returned as a result of searches” conducted by GCHQ.
  • Perhaps unsurprisingly, GCHQ appears to have readily exploited this obscure legal technicality. Secret policy guidance papers issued to the agency’s analysts instruct them that they can sift through huge troves of indiscriminately collected metadata records to spy on anyone regardless of their nationality. The guidance makes clear that there is no exemption or extra privacy protection for British people or citizens from countries that are members of the Five Eyes, a surveillance alliance that the U.K. is part of alongside the U.S., Canada, Australia, and New Zealand. “If you are searching a purely Events only database such as MUTANT BROTH, the issue of location does not occur,” states one internal GCHQ policy document, which is marked with a “last modified” date of July 2012. The document adds that analysts are free to search the databases for British metadata “without further authorization” by inputing a U.K. “selector,” meaning a unique identifier such as a person’s email or IP address, username, or phone number. Authorization is “not needed for individuals in the U.K.,” another GCHQ document explains, because metadata has been judged “less intrusive than communications content.” All the spies are required to do to mine the metadata troves is write a short “justification” or “reason” for each search they conduct and then click a button on their computer screen.
  • Intelligence GCHQ collects on British persons of interest is shared with domestic security agency MI5, which usually takes the lead on spying operations within the U.K. MI5 conducts its own extensive domestic surveillance as part of a program called DIGINT (digital intelligence).
  • GCHQ’s documents suggest that it typically retains metadata for periods of between 30 days to six months. It stores the content of communications for a shorter period of time, varying between three to 30 days. The retention periods can be extended if deemed necessary for “cyber defense.” One secret policy paper dated from January 2010 lists the wide range of information the agency classes as metadata — including location data that could be used to track your movements, your email, instant messenger, and social networking “buddy lists,” logs showing who you have communicated with by phone or email, the passwords you use to access “communications services” (such as an email account), and information about websites you have viewed.
  • Records showing the full website addresses you have visited — for instance, www.gchq.gov.uk/what_we_do — are treated as content. But the first part of an address you have visited — for instance, www.gchq.gov.uk — is treated as metadata. In isolation, a single metadata record of a phone call, email, or website visit may not reveal much about a person’s private life, according to Ethan Zuckerman, director of Massachusetts Institute of Technology’s Center for Civic Media. But if accumulated and analyzed over a period of weeks or months, these details would be “extremely personal,” he told The Intercept, because they could reveal a person’s movements, habits, religious beliefs, political views, relationships, and even sexual preferences. For Zuckerman, who has studied the social and political ramifications of surveillance, the most concerning aspect of large-scale government data collection is that it can be “corrosive towards democracy” — leading to a chilling effect on freedom of expression and communication. “Once we know there’s a reasonable chance that we are being watched in one fashion or another it’s hard for that not to have a ‘panopticon effect,’” he said, “where we think and behave differently based on the assumption that people may be watching and paying attention to what we are doing.”
  • When compared to surveillance rules in place in the U.S., GCHQ notes in one document that the U.K. has “a light oversight regime.” The more lax British spying regulations are reflected in secret internal rules that highlight greater restrictions on how NSA databases can be accessed. The NSA’s troves can be searched for data on British citizens, one document states, but they cannot be mined for information about Americans or other citizens from countries in the Five Eyes alliance. No such constraints are placed on GCHQ’s own databases, which can be sifted for records on the phone calls, emails, and Internet usage of Brits, Americans, and citizens from any other country. The scope of GCHQ’s surveillance powers explain in part why Snowden told The Guardian in June 2013 that U.K. surveillance is “worse than the U.S.” In an interview with Der Spiegel in July 2013, Snowden added that British Internet cables were “radioactive” and joked: “Even the Queen’s selfies to the pool boy get logged.”
  • In recent years, the biggest barrier to GCHQ’s mass collection of data does not appear to have come in the form of legal or policy restrictions. Rather, it is the increased use of encryption technology that protects the privacy of communications that has posed the biggest potential hindrance to the agency’s activities. “The spread of encryption … threatens our ability to do effective target discovery/development,” says a top-secret report co-authored by an official from the British agency and an NSA employee in 2011. “Pertinent metadata events will be locked within the encrypted channels and difficult, if not impossible, to prise out,” the report says, adding that the agencies were working on a plan that would “(hopefully) allow our Internet Exploitation strategy to prevail.”
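The metadata/content line drawn in the documents above (the full address you visit is treated as content, the site-level address as metadata) is mechanical enough to show in a few lines; the sketch below simply splits a URL the way that policy describes.

```python
# Illustration of the content/metadata split described in the GCHQ documents:
# the full address counts as content, the site-level portion as metadata.
from urllib.parse import urlsplit

def split_for_policy(url):
    parts = urlsplit(url)
    return {
        "content": url,                                         # full address, path included
        "metadata": "%s://%s" % (parts.scheme, parts.netloc),   # scheme + host only
    }

example = split_for_policy("https://www.gchq.gov.uk/what_we_do")
print(example["metadata"])  # https://www.gchq.gov.uk
print(example["content"])   # https://www.gchq.gov.uk/what_we_do
```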
Gonzalo San Gil, PhD.

5 Linux Command Line Based Tools for Downloading Files and Browsing Websites - 0 views

  •  
    "Linux command-line, the most adventurous and fascinating part of GNU/Linux is very cool and powerful tool. Command line itself is very productive and the availability of various inbuilt and third party command line application makes Linux robust and powerfu"
Paul Merrell

Mozilla Acquires Pocket | The Mozilla Blog - 0 views

  • We are excited to announce that the Mozilla Corporation has completed the acquisition of Read It Later, Inc., the developer of Pocket. Mozilla is growing, experimenting more, and doubling down on our mission to keep the internet healthy, as a global public resource that’s open and accessible to all. As our first strategic acquisition, Pocket contributes to our strategy by growing our mobile presence and providing people everywhere with powerful tools to discover and access high quality web content, on their terms, independent of platform or content silo. Pocket will join Mozilla’s product portfolio as a new product line alongside the Firefox web browsers, with a focus on promoting the discovery and accessibility of high quality web content. (Here’s a link to their blog post on the acquisition.) Pocket’s core team and technology will also accelerate Mozilla’s broader Context Graph initiative.
  • “We believe that the discovery and accessibility of high quality web content is key to keeping the internet healthy by fighting against the rising tide of centralization and walled gardens. Pocket provides people with the tools they need to engage with and share content on their own terms, independent of hardware platform or content silo, for a safer, more empowered and independent online experience.” – Chris Beard, Mozilla CEO Pocket brings to Mozilla a successful human-powered content recommendation system with 10 million unique monthly active users on iOS, Android and the Web, and with more than 3 billion pieces of content saved to date. In working closely with Pocket over the last year around the integration within Firefox, we developed a shared vision and belief in the opportunity to do more together that has led to Pocket joining Mozilla today. “We’ve really enjoyed partnering with Mozilla over the past year. We look forward to working more closely together to support the ongoing growth of Pocket and to create great new products that people love in support of our shared mission.” – Nate Weiner, Pocket CEO As a result of this strategic acquisition, Pocket will become a wholly owned subsidiary of Mozilla Corporation and will become part of the Mozilla open source project.